Randomized smoothing is considered to be the state-of-the-art defense against adversarial perturbations. However, it heavily exploits the fact that classifiers map input objects to class probabilities, and it does not focus on models that learn a metric space in which classification is performed by computing distances to the embeddings of class prototypes. In this work, we extend randomized smoothing to few-shot learning models that map inputs to normalized embeddings. We provide an analysis of the Lipschitz continuity of such models and derive robustness certificates against $\ell_2$-bounded perturbations that may be useful in few-shot learning scenarios. Our theoretical results are confirmed by experiments on different datasets.
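As a rough illustration of the smoothed-embedding idea, the sketch below averages encoder outputs over Gaussian noise, renormalizes the mean, and classifies by cosine similarity to class prototypes. The encoder `f`, the noise level `sigma`, and the sample count are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def smoothed_embedding(f, x, sigma=0.25, n_samples=100):
    """Monte Carlo estimate of the Gaussian-smoothed embedding of x."""
    noise = torch.randn(n_samples, *x.shape) * sigma
    z = f(x.unsqueeze(0) + noise)              # (n_samples, d) embeddings
    z = F.normalize(z, dim=1)                  # project each sample onto the unit sphere
    return F.normalize(z.mean(dim=0), dim=0)   # average, then renormalize

def prototype_classify(f, x, prototypes, sigma=0.25):
    """Assign x to the class whose unit-norm prototype is closest in cosine distance."""
    z = smoothed_embedding(f, x, sigma)
    return (prototypes @ z).argmax().item()    # prototypes: (n_classes, d)
```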
Supporting the current trend in the AI community, we propose the AI Journey 2021 Challenge called Fusion Brain, which is targeted at making a common architecture process different modalities (namely, images, text, and code) and solve multiple tasks for vision and language. The Fusion Brain Challenge (https://github.com/sberbank-ai/fusion_brain_aij2021) combines the following specific tasks: Code2code Translation, Handwritten Text Recognition, Zero-shot Object Detection, and Visual Question Answering. We have created datasets for each task to test participants' submissions. Moreover, we have opened a new handwritten dataset in both Russian and English, which consists of 94,130 pairs of images and texts. The Russian part of the dataset is the largest Russian handwritten dataset in the world. We also propose baseline solutions as well as the corresponding task-specific and overall metrics.
Currently, the most popular method for providing robustness certificates is randomized smoothing, where an input is smoothed via some probability distribution. We propose a new approach to randomized smoothing over multiplicative parameters. Using this method, we can certify classifiers that are robust with respect to gamma-correction perturbations, and we compare the resulting certificates with those obtained via other smoothing distributions (Gaussian, Laplace, uniform). Experiments show that the asymmetric Rayleigh distribution yields better certificates for some values of the perturbation parameter. To the best of our knowledge, this is the first work on certified robustness against multiplicative gamma-correction transformations, and the first to study the effect of asymmetric distributions in randomized smoothing.
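A minimal sketch of smoothing over a multiplicative parameter, assuming pixel values in $[0, 1]$ and a Rayleigh-distributed gamma; the scale and sample count are illustrative, and the paper's exact parameterization of the gamma distribution may differ.

```python
import numpy as np

def gamma_smoothed_predict(classifier, x, n_samples=100, scale=0.3):
    """Majority vote of `classifier` over gamma-corrected copies of x."""
    x = np.clip(x, 0.0, 1.0)
    # Rayleigh samples are asymmetric and strictly positive; the exact
    # parameterization used in the paper is not reproduced here.
    gammas = np.random.rayleigh(scale, size=n_samples)
    votes = [classifier(x ** g) for g in gammas]   # x**g is the gamma correction
    return np.bincount(votes).argmax()
```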
The brittleness of deep image classifiers to small adversarial input perturbations has been extensively studied for several years. However, the main objective of existing perturbations has mostly been limited to changing the correct top-1 prediction to an incorrect one, without attempting to change the top-k predictions. In many digital real-world scenarios, the top-k predictions are more relevant. In this work, we propose a fast and accurate method for generating top-k adversarial examples via simple multi-objective optimization. We demonstrate its efficacy by comparing it with other adversarial example generation techniques. Moreover, based on this method, we propose top-k universal adversarial perturbations: image-agnostic tiny perturbations that cause the true class to be absent from the top-k predictions for the majority of natural images. We experimentally show that our approach outperforms baseline methods and even improves existing techniques for finding universal adversarial perturbations.
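For illustration, a hedged sketch of a gradient-based top-k attack: it pushes the true-class logit below the $k$-th largest competing logit with projected gradient steps. The margin loss, step size, and iteration budget are illustrative stand-ins, not the paper's multi-objective formulation.

```python
import torch

def top_k_attack(model, x, y, k=5, eps=8/255, alpha=1/255, steps=40):
    """Perturb x (shape (1, C, H, W)) so class y leaves the top-k predictions."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        logits = model(x + delta)
        others = logits.clone()
        others[0, y] = -float("inf")               # mask out the true class
        kth = others.topk(k, dim=1).values[0, -1]  # k-th largest competitor
        loss = logits[0, y] - kth                  # > 0 while y is still in the top-k
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()     # descend on the margin
            delta.clamp_(-eps, eps)                # stay in the eps-ball
        delta.grad.zero_()
    return (x + delta).detach()                    # pixel-range clamping omitted for brevity
```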
Many challenging reinforcement learning (RL) problems require designing a distribution of tasks that can be applied to train effective policies. This distribution of tasks can be specified by the curriculum. A curriculum is meant to improve the results of learning and accelerate it. We introduce Success Induced Task Prioritization (SITP), a framework for automatic curriculum learning, where a task sequence is created based on the success rate of each task. In this setting, each task is an algorithmically created environment instance with a unique configuration. The algorithm selects the order of tasks that provides the fastest learning for agents. The probability of selecting any of the tasks for the next stage of learning is determined by evaluating its performance score in previous stages. Experiments were carried out in the Partially Observable Grid Environment for Multiple Agents (POGEMA) and the Procgen benchmark. We demonstrate that SITP matches or surpasses the results of other curriculum design methods. Our method can be implemented with a handful of minor modifications to any standard RL framework and provides useful prioritization with minimal computational overhead.
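As an illustration of the idea (not the exact SITP scoring rule), the sketch below samples tasks in proportion to the recent change in their success rate, so tasks that are still being learned are prioritized.

```python
import math
import random

class SuccessInducedPrioritizer:
    """Sample tasks in proportion to recent change in their success rate."""

    def __init__(self, tasks, temperature=0.3):
        self.tasks = list(tasks)
        self.success = {t: 0.0 for t in self.tasks}  # last observed success rate
        self.score = {t: 1.0 for t in self.tasks}    # learning-progress estimate
        self.tau = temperature

    def update(self, task, success_rate):
        # Tasks whose success rate is still moving are the ones being learned.
        self.score[task] = abs(success_rate - self.success[task])
        self.success[task] = success_rate

    def sample(self):
        weights = [math.exp(self.score[t] / self.tau) for t in self.tasks]
        return random.choices(self.tasks, weights=weights)[0]
```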
We present a novel dataset named HPointLoc, specially designed for exploring the capabilities of visual place recognition in indoor environments and loop detection in simultaneous localization and mapping. The loop detection sub-task is especially relevant when a robot with an on-board RGB-D camera can drive past the same place (``Point") at different angles. The dataset is based on the popular Habitat simulator, in which photorealistic indoor scenes can be generated using both custom sensor data and open datasets, such as Matterport3D. To study the main stages of solving the place recognition problem on the HPointLoc dataset, we propose a new modular approach named PNTR. It first performs image retrieval with the Patch-NetVLAD method, then extracts keypoints and matches them using R2D2, LoFTR or SuperPoint with SuperGlue, and finally performs a camera pose optimization step with TEASER++. Such a solution to the place recognition problem has not been studied in previous publications. The PNTR approach has shown the best quality metrics on the HPointLoc dataset and has high potential for real use in localization systems for unmanned vehicles. The proposed dataset and framework are publicly available: https://github.com/metra4ok/HPointLoc.
This paper addresses kinodynamic motion planning for non-holonomic robots in dynamic environments with both static and dynamic obstacles -- a challenging problem that still lacks a universal solution. One of the promising approaches is to decompose the problem into smaller sub-problems and combine the local solutions into a global one. The crux of any planning method for non-holonomic robots is the generation of motion primitives, which provides solutions to the local planning sub-problems. In this work we introduce a novel learnable steering function (policy) that takes into account the kinodynamic constraints of the robot as well as both static and dynamic obstacles. This policy is efficiently trained via policy optimization. Empirically, we show that our steering function generalizes well to unseen problems. We then plug the trained policy into sampling-based and lattice-based planners, and evaluate the resultant POLAMP algorithm (Policy Optimization that Learns Adaptive Motion Primitives) in a range of challenging setups that involve a car-like robot operating in obstacle-rich parking-lot environments. We show that POLAMP is able to plan collision-free kinodynamic trajectories with success rates higher than 92% when 50 simultaneously moving obstacles populate the environment, outperforming state-of-the-art competitors.
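A hedged sketch of how a learned steering function can serve as a local planner inside a sampling-based framework: the policy rolls the system from a tree node toward a sampled state under the dynamics. The `policy` and `dynamics` callables and the tolerance are illustrative placeholders, not the paper's implementation.

```python
import numpy as np

def steer(policy, dynamics, start, goal, max_steps=50, tol=0.1):
    """Roll out a learned steering policy from `start` toward `goal`."""
    state = np.asarray(start, dtype=float)
    path = [state]
    for _ in range(max_steps):
        action = policy(state, goal)      # learned steering function (policy)
        state = dynamics(state, action)   # one step under kinodynamic constraints
        path.append(state)
        if np.linalg.norm(state[:2] - np.asarray(goal, dtype=float)[:2]) < tol:
            break                         # close enough: the local edge is feasible
    return path
```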
Heuristic search algorithms, e.g. A*, are the commonly used tools for pathfinding on grids, i.e. graphs of regular structure that are widely employed to represent environments in robotics, video games, etc. Instance-independent heuristics for grid graphs, e.g. Manhattan distance, do not take the obstacles into account, and thus the search led by such heuristics performs poorly in obstacle-rich environments. To this end, we suggest learning instance-dependent heuristic proxies that are supposed to notably increase the efficiency of the search. The first heuristic proxy we suggest to learn is the correction factor, i.e. the ratio between the instance-independent cost-to-go estimate and the perfect one (computed offline at the training phase). Unlike learning the absolute values of the cost-to-go heuristic function, which has been explored before, learning the correction factor utilizes the knowledge contained in the instance-independent heuristic. The second heuristic proxy is the path probability, which indicates how likely it is that a grid cell lies on the shortest path. This heuristic can be utilized in the Focal Search framework as the secondary heuristic, allowing us to preserve the guarantees on the bounded sub-optimality of the solution. We learn both suggested heuristics in a supervised fashion with state-of-the-art neural networks containing attention blocks (transformers). We conduct a thorough empirical evaluation on a comprehensive dataset of planning tasks, showing that the suggested techniques i) reduce the computational effort of A* by up to a factor of $4$ while producing solutions whose costs exceed the costs of the optimal solutions by less than $0.3\%$ on average; ii) outperform the competitors, which include the conventional techniques from heuristic search, i.e. weighted A*, as well as state-of-the-art learnable planners.
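A minimal sketch of plugging the learned correction factor into A*, following the abstract's definition (instance-independent estimate divided by the perfect cost-to-go), so the corrected heuristic divides the Manhattan distance by the predicted factor. The `correction` predictor and `neighbors` function are assumed callables; the search loop itself is standard A*.

```python
import heapq

def a_star_with_correction(start, goal, neighbors, correction):
    """A* on a grid; `correction(n)` predicts manhattan(n) / perfect_cost_to_go(n)."""
    manhattan = lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1])
    h = lambda n: manhattan(n) / max(correction(n), 1e-6)  # corrected heuristic
    g = {start: 0}
    open_list = [(h(start), start)]
    while open_list:
        _, node = heapq.heappop(open_list)
        if node == goal:
            return g[node]
        for nxt, cost in neighbors(node):
            if g[node] + cost < g.get(nxt, float("inf")):
                g[nxt] = g[node] + cost
                heapq.heappush(open_list, (g[nxt] + h(nxt), nxt))
    return None  # no path found
```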
This paper presents a class of new fast non-trainable entropy-based confidence estimation methods for automatic speech recognition. We show how per-frame entropy values can be normalized and aggregated to obtain a confidence measure per unit and per word for Connectionist Temporal Classification (CTC) and Recurrent Neural Network Transducer (RNN-T) models. The proposed methods have computational complexity similar to the traditional method based on the maximum per-frame probability, but they are more adjustable, have a wider effective threshold range, and better separate the confidence distributions of correct and incorrect words. We evaluate the proposed confidence measures on the LibriSpeech test sets, and show that they are up to 2 and 4 times better than confidence estimation based on the maximum per-frame probability at detecting incorrect words for Conformer-CTC and Conformer-RNN-T models, respectively.
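A minimal sketch of one such measure, assuming per-frame posteriors for a single word: entropies are normalized by the maximum entropy $\log V$ and mean-aggregated. The normalization and aggregation used here are illustrative choices among the variants the paper compares.

```python
import numpy as np

def entropy_confidence(frame_probs):
    """frame_probs: (T, V) array of per-frame posteriors over V tokens for one word."""
    eps = 1e-12
    ent = -(frame_probs * np.log(frame_probs + eps)).sum(axis=1)  # per-frame entropy
    ent_norm = ent / np.log(frame_probs.shape[1])                 # scale to [0, 1]
    return float(1.0 - ent_norm.mean())                           # high confidence = low entropy
```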
Independence testing is a fundamental and classical statistical problem that has been extensively studied in the batch setting when one fixes the sample size before collecting data. However, practitioners often prefer procedures that adapt to the complexity of a problem at hand instead of setting sample size in advance. Ideally, such procedures should (a) allow stopping earlier on easy tasks (and later on harder tasks), hence making better use of available resources, and (b) continuously monitor the data and efficiently incorporate statistical evidence after collecting new data, while controlling the false alarm rate. It is well known that classical batch tests are not tailored for streaming data settings, since valid inference after data peeking requires correcting for multiple testing, but such corrections generally result in low power. In this paper, we design sequential kernelized independence tests (SKITs) that overcome such shortcomings based on the principle of testing by betting. We exemplify our broad framework using bets inspired by kernelized dependence measures such as the Hilbert-Schmidt independence criterion (HSIC) and the constrained-covariance criterion (COCO). Importantly, we also generalize the framework to non-i.i.d. time-varying settings, for which there exist no batch tests. We demonstrate the power of our approaches on both simulated and real data.
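As a toy illustration of testing by betting (not the paper's HSIC- or COCO-based bets): wealth multiplies by $1 + \lambda \cdot \text{payoff}$ each round, where the payoff is predictable (uses only past data) and has zero mean under independence, and the test rejects once wealth reaches $1/\alpha$, which controls the false alarm rate by Ville's inequality.

```python
import numpy as np

def sequential_independence_test(stream, alpha=0.05, lam=0.5):
    """stream yields (x, y) pairs one at a time; reject H0 when wealth hits 1/alpha."""
    wealth = 1.0
    hist_x, hist_y = [], []
    for x, y in stream:
        if hist_x:  # bet only after observing at least one past pair
            x_prev, y_prev = hist_x[-1], hist_y[-1]
            # tanh is odd and (x - x') is symmetric about 0 under independence,
            # so this payoff has zero mean under H0 and lies in (-1, 1).
            payoff = np.tanh((x - x_prev) * (y - y_prev))
            wealth *= 1.0 + lam * payoff
            if wealth >= 1.0 / alpha:
                return "reject H0 (dependence detected)"
        hist_x.append(x)
        hist_y.append(y)
    return "fail to reject H0"
```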